Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems
In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > Middle East > Iran (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- (3 more...)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
The Pentagon is planning for AI companies to train on classified data, defense official says
The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks: sensitive intelligence such as surveillance reports or battlefield assessments could become embedded in the models themselves, and AI firms would come into closer contact with classified data than ever before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background.
- Asia > Middle East > Iran (0.25)
- North America > United States > Massachusetts (0.05)
- Information Technology (1.00)
- Government > Military (1.00)
An AI image generator for non-English speakers
Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric. Researchers at the University of Amsterdam Faculty of Science have created NeoBabel, an AI image generator that works in six different languages. Because all elements of the research are open source, anyone can build on the model and help push inclusive AI research forward. When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are English at their core: if you use another language, your prompt is translated into English before the image is created.
- Europe > Netherlands > North Holland > Amsterdam (0.27)
- Asia > Singapore (0.05)
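The translate-then-generate pipeline described above can be sketched in a few lines. This is purely illustrative: the function names and the tiny translation table are hypothetical stand-ins, not NeoBabel's actual API, and the point is only the difference between routing a prompt through English and consuming it directly.

```python
# Illustrative sketch (all names hypothetical): English-centric image
# generators translate non-English prompts into English first, while a
# multilingual model consumes the prompt in its original language.

def translate_to_english(prompt: str, source_lang: str) -> str:
    # Stand-in for a real machine-translation call.
    translations = {("een kat op een fiets", "nl"): "a cat on a bicycle"}
    return translations.get((prompt, source_lang), prompt)

def generate_image_english_centric(prompt: str, lang: str) -> str:
    # English-centric pipeline: translate first, then generate.
    english_prompt = prompt if lang == "en" else translate_to_english(prompt, lang)
    return f"<image for: {english_prompt}>"

def generate_image_multilingual(prompt: str, lang: str) -> str:
    # Multilingual pipeline: no translation step, so nuances of the
    # original language survive into generation.
    return f"<image for: {prompt} [{lang}]>"
```

The translation hop is where quality is lost for non-English users: any nuance the translator drops never reaches the image model, which is the gap a natively multilingual model avoids.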
Where OpenAI's technology could show up in Iran
Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money: OpenAI is spending heavily on AI training and is on the hunt for more revenue (from sources including ads).
- Asia > Middle East > Iran (0.64)
- North America > United States > Massachusetts (0.05)
- Asia > Middle East > Kuwait (0.05)
- Asia > China (0.05)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.48)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Information Technology > Security & Privacy (0.77)
- Leisure & Entertainment > Games > Computer Games (0.58)
'100 Video Calls Per Day': Models Are Applying to Be the Face of AI Scams
Dozens of Telegram channels reviewed by WIRED include job listings for "AI face models." The (mostly) women who land these gigs are likely being used to dupe victims out of their money. "I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish," the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately. Those impressive language skills, however, have likely been put to use as part of elaborate "pig-butchering" scams targeting Americans.
- Asia > Cambodia > Preah Sihanouk Province > Sihanoukville (0.24)
- Asia > Middle East > Iran (0.16)
- North America > United States > California (0.14)
- (13 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Law > Criminal Law (0.69)
- Government > Regional Government (0.69)
Compare top AI models with this $79 lifetime license
When you purchase through links in our articles, we may earn a small commission. ChatPlayground AI lets you run a single prompt across multiple top AI models and compare the results instantly, now just $79 for lifetime access. Using AI tools can feel a bit like a juggling act. One model might be great for brainstorming, another for writing code, and another for summarizing documents. Before long, you're bouncing between platforms just to compare results.
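The core pattern such comparison tools automate is a simple fan-out: send one prompt to several model backends and collect the answers side by side. A minimal sketch, with all names and the stub backends hypothetical (real use would substitute actual API clients):

```python
# Illustrative sketch (hypothetical names): fan one prompt out to several
# model backends and collect the results keyed by model name.

from typing import Callable, Dict

def compare_models(prompt: str,
                   models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    # Run the same prompt through every backend so the outputs can be
    # read side by side.
    return {name: ask(prompt) for name, ask in models.items()}

# Stub backends standing in for real model API clients.
backends = {
    "model-a": lambda p: f"A says: {p.upper()}",
    "model-b": lambda p: f"B says: {len(p.split())} words",
}

results = compare_models("summarize this report", backends)
```

In practice each backend call would be a network request, so a real tool would issue them concurrently rather than sequentially as this sketch does.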
We don't know if AI-powered toys are safe, but they're here anyway
Toys powered by AI show a worrying lack of emotional understanding. (Pictured: Mya, aged 3, and her mother Vicky playing with an AI toy called Gabbo during an observation at the University of Cambridge's Faculty of Education.) Even the most cutting-edge AI models are prone to presenting fabrication as fact, dispensing dangerous information and failing to grasp social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry. Some scientists are warning that the devices could be risky and require strict regulation. In the latest study, researchers even observed a 5-year-old telling such a toy "I love you", to which it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.26)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Health & Medicine > Therapeutic Area (1.00)
- Government (1.00)
- Law (0.96)
- Education (0.68)
AI chatbots can effectively sway voters – in either direction
The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points or more in many cases. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."
- North America > United States (0.31)
- Asia > Singapore (0.05)
The Chinese AI app sending Hollywood into a panic
A new artificial intelligence (AI) model developed by the Chinese company behind TikTok rocked Hollywood this week - not just because of what it can do, but because of what it could mean for creative industries. Created by tech giant ByteDance, Seedance 2.0 can generate cinema-quality video, complete with sound effects and dialogue, from just a few written prompts. Many of the clips said to have been made using Seedance, featuring popular characters like Spider-Man and Deadpool, went viral.
What is Seedance - and why the stir?
Seedance was launched to little fanfare in June 2025, but it is the second version, which came eight months later, that has caused a major stir.
- North America > Central America (0.15)
- Oceania > Australia (0.05)
- Europe > United Kingdom > Wales (0.05)
- (13 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Law (0.97)
- Information Technology > Communications > Social Media (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.49)